
    Efficient on the fly maintenance of series-parallel relationships

    A series-parallel directed acyclic graph, or SP-dag, is a dag in which any two nodes are either logically in series or logically in parallel. We present a data structure and algorithm to determine efficiently, in a single serial walk of the dag, whether two nodes are logically in parallel. We also present a modified version of this algorithm to detect parallel threads in any (parallel or serial) execution of a Cilk dag. The techniques we present in this paper depend on an order-maintenance data structure inspired by Dietz and Sleator, which supports inserts and queries in O(1) amortized time. We maintain two complementary total orders of the dag's nodes. If two nodes have the same relative order in both total orders, they operate in series; if their relative orders differ, they operate logically in parallel. Our algorithm maintains both orders during a single serial walk of the dag and takes O(T) time, where T is the time to execute a serial walk of the dag. The Dietz and Sleator order-maintenance structure does not support concurrent operations. Given the work-first property of the Cilk scheduler and a bounded number of steals (with high probability), we can maintain a separate order structure for each processor in addition to a global order structure; concurrent operations are required in the global order structure only on a steal. We prove that a Cilk program modified with our algorithm has a running time bounded to within a constant factor of the original program's. Determinacy-race detection depends on knowledge of the SP relationships of a parallel program. We apply the serial algorithm described above to a determinacy-race detector in Cilk and run benchmarks comparing the running time of this implementation to that of the current Nondeterminator, which relies on least-common-ancestor lookups. Singapore-MIT Alliance (SMA)
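
    For illustration, here is a minimal Python sketch (not from the paper) of the two-order idea: it derives the two complementary orders, often called the English and Hebrew orderings, from an explicit SP parse tree, and uses plain list walks with O(n) position lookups in place of the paper's O(1)-amortized order-maintenance structure. The class names and tree encoding are invented for the example.

```python
class S:
    """Series composition: left executes entirely before right."""
    def __init__(self, left, right):
        self.left, self.right = left, right

class P:
    """Parallel composition: left and right are logically parallel."""
    def __init__(self, left, right):
        self.left, self.right = left, right

def walk(node, out, flip_parallel):
    # English order visits children left-to-right everywhere;
    # Hebrew order visits the children of P-nodes right-to-left.
    if isinstance(node, str):                  # leaves are thread names
        out.append(node)
    elif isinstance(node, P) and flip_parallel:
        walk(node.right, out, flip_parallel)
        walk(node.left, out, flip_parallel)
    else:
        walk(node.left, out, flip_parallel)
        walk(node.right, out, flip_parallel)

def logically_parallel(tree, u, v):
    english, hebrew = [], []
    walk(tree, english, flip_parallel=False)
    walk(tree, hebrew, flip_parallel=True)
    # Same relative order in both walks => series; different => parallel.
    return (english.index(u) < english.index(v)) != \
           (hebrew.index(u) < hebrew.index(v))

# Example: a and b run in series; each runs logically in parallel with c.
tree = P(S("a", "b"), "c")
assert not logically_parallel(tree, "a", "b")
assert logically_parallel(tree, "a", "c")
```

    In the algorithm described in the abstract, both orders are instead maintained incrementally during a single serial walk of the dag, so each parallelism query costs only O(1) amortized time.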

    Optimizing the Dimensional Method for Performing Multidimensional, Multiprocessor, Out-of-Core FFTs

    We present an improved version of the Dimensional Method for computing multidimensional Fast Fourier Transforms (FFTs) on a multiprocessor system when the data consist of too many records to fit into memory. Data are spread across parallel disks and processed in sections; we use the Parallel Disk Model for the analysis. The simple Dimensional Method performs the 1-dimensional FFTs for each dimension in turn, with an out-of-core permutation between dimensions to rearrange the data into contiguous locations. The improved Dimensional Method processes multiple dimensions at a time. We show that determining an optimal sequence and grouping of the dimensions is NP-complete. We then independently analyze the effects of two modifications to the Dimensional Method: processing multiple dimensions at one time, and processing single dimensions in a different order. Finally, we show a lower bound on the I/O complexity of the Dimensional Method and present an algorithm that approximately matches this bound asymptotically.
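
    As a point of reference, the in-memory analogue of the simple Dimensional Method is just a sequence of 1-dimensional FFTs, one axis at a time; the NumPy sketch below (not from the paper) checks it against a direct multidimensional FFT. The out-of-core permutations between dimensions have no visible counterpart here because the whole array fits in memory.

```python
import numpy as np

def dimensional_fft(a):
    """Basic Dimensional Method, in-memory version: apply a 1-D FFT
    along each axis in turn.  In the out-of-core setting each pass
    streams sections of the data from parallel disks, with a
    permutation between dimensions to bring each axis into contiguous
    locations; here the permutations are implicit."""
    for axis in range(a.ndim):
        a = np.fft.fft(a, axis=axis)
    return a

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 16))
assert np.allclose(dimensional_fft(x), np.fft.fftn(x))  # agrees with fftn
```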

    Single-Source Shortest Paths with Negative Real Weights in Õ(mn^(8/9)) Time

    This paper presents a randomized algorithm for the problem of single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with n vertices and m edges, the algorithm completes in Õ(mn^(8/9)) time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic O(mn)-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore.
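
    For context, the classic O(mn) baseline that the paper improves on is the Bellman-Ford relaxation scheme; a standard Python version (not the paper's randomized algorithm) looks like this:

```python
def bellman_ford(n, edges, source):
    """Classic O(mn) single-source shortest paths with real edge weights
    (the baseline the paper improves on).  `edges` is a list of (u, v, w)
    triples with 0 <= u, v < n; returns the distance list, or None if a
    negative cycle is reachable from `source`."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):                # n-1 rounds of relaxations suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:                 # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

# A 4-vertex graph with a negative edge but no negative cycle.
print(bellman_ford(4, [(0, 1, 2.0), (1, 2, -5.0), (2, 3, 1.0), (0, 3, 0.5)], 0))
```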

    Algorithms incorporating concurrency and caching

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 189-203).
    This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency (dealing with parallel accesses to the same data) and caching (achieving good memory performance). This thesis includes two parallel algorithms that focus on testing for atomicity violations in a parallel fork-join program. These algorithms augment a parallel program with a data structure that answers queries about the program's structure on the fly. Specifically, one data structure, called SP-ordered-bags, maintains the series-parallel relationships among threads, which is vital for uncovering race conditions (bugs) in the program. Another data structure, called XConflict, aids in detecting conflicts in a transactional-memory system with nested parallel transactions. For a program with work T1 and span T∞, maintaining either data structure adds an overhead of O(PT∞) to the running time of the parallel program when executed on P processors using an efficient scheduler, yielding a total runtime of O(T1/P + PT∞). For each of these data structures, queries can be answered in O(1) time.
    This thesis also introduces the compressed sparse blocks (CSB) storage format for sparse matrices, which allows both Ax and A^T x to be computed efficiently in parallel, where A is an n × n sparse matrix with nnz > n nonzeros and x is a dense n-vector. The parallel multiplication algorithm uses Θ(nnz) work and Θ(√n lg n) span, yielding a parallelism of Θ(nnz/(√n lg n)), which is amply high for virtually any large matrix.
    Also addressing concurrency, this thesis considers two scheduling problems. The first scheduling problem, motivated by transactional memory, considers randomized backoff when jobs have different lengths. I give an analysis showing that binary exponential backoff achieves makespan V·2^Θ(√(lg V)) with high probability, where V is the total length of all n contending jobs. This bound is significantly larger than when jobs are all the same size. A variant of exponential backoff, however, achieves makespan O(V lg V) with high probability. I also present the size-hashed backoff protocol, specifically designed for jobs having different lengths, which achieves makespan O(V lg^3 lg V) with high probability. The second scheduling problem considers scheduling n unit-length jobs on m unrelated machines, where each job may fail probabilistically. Specifically, an input consists of a set of n jobs, a directed acyclic graph G describing the precedence constraints among jobs, and a failure probability qij for each job j and machine i. The goal is to find a schedule that minimizes the expected makespan. I give an O(log log(min{m, n}))-approximation for the case of independent jobs (when there are no precedence constraints) and an O(log(n + m) log log(min{m, n}))-approximation algorithm when the precedence constraints form disjoint chains. The chain algorithm can be extended into one that supports precedence constraints forming trees, which worsens the approximation by another log(n) factor.
    To address caching, this thesis includes several new variants of cache-oblivious dynamic dictionaries.
    A cache-oblivious dictionary fills the same niche as a classic B-tree, but it does so without tuning for particular memory parameters. Thus, cache-oblivious dictionaries optimize for all levels of a multilevel hierarchy and are more portable than traditional B-trees. I describe how to add concurrency to several previously existing cache-oblivious dictionaries. I also describe two new data structures that achieve significantly cheaper insertions with a small overhead on searches. The cache-oblivious lookahead array (COLA) supports insertions/deletions and searches in O((1/B) log N) and O(log N) memory transfers, respectively, where B is the block size, M is the memory size, and N is the number of elements in the data structure. The xDict supports these operations in O((1/(εB^(1-ε))) log_B(N/M)) and O((1/ε) log_B(N/M)) memory transfers, respectively, where 0 < ε < 1 is a tunable parameter.
    Also on caching, this thesis answers the question: what is the worst possible page-replacement strategy? The goal of this whimsical chapter is to devise an online strategy that incurs the highest possible fraction of page faults (cache misses) compared to the worst offline strategy. I show that no deterministic strategy is competitive with the worst offline strategy. I also give a randomized strategy based on the most-recently-used heuristic and show that it is a worst possible page-replacement policy. On a more serious note, I also show that direct mapping is, in some sense, a worst possible page-replacement policy.
    Finally, this thesis includes a new algorithm, following a new approach, for the problem of maintaining a topological ordering of a dag as edges are dynamically inserted. The main result included here is an O(n^2 log n) algorithm for maintaining a topological ordering in the presence of up to m < n(n-1)/2 edge insertions. In contrast, the previously best algorithm has a total running time of O(min{m^(3/2), n^(5/2)}). Although these algorithms are not parallel and do not exhibit particularly good locality, some of the data-structural techniques employed in my solution are similar to others in this thesis.
    by Jeremy T. Fineman. Ph.D.
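
    As a rough illustration of the COLA's structure, here is a toy Python sketch (invented for this summary, not the thesis's implementation): level k holds either zero or exactly 2^k sorted keys, and an insertion merges its way upward until it finds an empty level. The sketch omits the lookahead pointers that give the real structure its O(log N) search bound.

```python
import bisect

class ToyCOLA:
    """Toy cache-oblivious lookahead array: level k is either empty or a
    sorted list of exactly 2**k keys.  An insertion carries a merged run
    upward until it finds an empty level, which is what yields the
    O((1/B) log N) amortized insertion bound in the external-memory
    analysis.  Real COLAs add lookahead pointers between levels so a
    search costs O(log N) rather than the O(log^2 N) done here."""

    def __init__(self):
        self.levels = []                    # levels[k]: sorted list or []

    def insert(self, key):
        carry = [key]
        for k, level in enumerate(self.levels):
            if not level:
                self.levels[k] = carry
                return
            carry = sorted(carry + level)   # stand-in for a linear-time merge
            self.levels[k] = []
        self.levels.append(carry)

    def contains(self, key):
        for level in self.levels:
            i = bisect.bisect_left(level, key)
            if i < len(level) and level[i] == key:
                return True
        return False

cola = ToyCOLA()
for x in [5, 3, 8, 1]:
    cola.insert(x)
assert cola.contains(3) and not cola.contains(7)
```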

    Provably good race detection that runs in parallel

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 93-98).
    A multithreaded parallel program that is intended to be deterministic may exhibit nondeterminism due to bugs called determinacy races. A key capability of race detectors is to determine whether one thread executes logically in parallel with another thread or whether the threads must operate in series. This thesis presents two algorithms, one serial and one parallel, to maintain the series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. For a fork-join program with T1 work and a critical-path length of T∞, the serial SP-Maintenance algorithm runs in O(T1) time. The parallel algorithm executes in the nearly optimal O(T1/P + PT∞) time when run on P processors using an efficient scheduler. These SP-maintenance algorithms can be incorporated into race detectors to yield a provably good race detector that runs in parallel. This thesis describes an efficient parallel race detector I call the Nondeterminator-3. For a fork-join program with T1 work, critical-path length T∞, and v shared-memory locations, the Nondeterminator-3 runs in O(T1/P + PT∞ lg P + min{(T1 lg P)/P, vT∞ lg P}) expected time when run on P processors using an efficient scheduler.
    by Jeremy T. Fineman. S.M.
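
    To make the connection to race detection concrete, here is a toy Python sketch (invented here, far simpler than the Nondeterminator-3) of a shadow-memory step that consumes SP queries: each access to a location is compared against previously recorded accessors, and a pair of logically parallel accesses of which at least one is a write is flagged as a determinacy race.

```python
def check_access(shadow, loc, thread, is_write, logically_parallel):
    """Toy shadow-memory step of an SP-maintenance-based determinacy-race
    detector: record who last touched each location and flag any pair of
    accesses, at least one a write, by logically parallel threads.  The
    `logically_parallel` argument stands in for an SP-maintenance query;
    all names here are invented for the example."""
    last_writer, readers = shadow.get(loc, (None, []))
    race = last_writer is not None and logically_parallel(last_writer, thread)
    if is_write:
        race = race or any(logically_parallel(r, thread) for r in readers)
        shadow[loc] = (thread, [])          # this write supersedes old state
    else:
        shadow[loc] = (last_writer, readers + [thread])
    return race

shadow = {}
par = lambda t1, t2: t1 != t2               # trivial oracle: all threads parallel
assert not check_access(shadow, "x", "A", True, par)   # first write: no race
assert check_access(shadow, "x", "B", False, par)      # parallel read-after-write
```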

    Reallocation Problems in Scheduling

    In traditional on-line problems, such as scheduling, requests arrive over time, demanding available resources. As each request arrives, some resources may have to be irrevocably committed to servicing that request. In many situations, however, it may be possible or even necessary to reallocate previously allocated resources in order to satisfy a new request. This reallocation has a cost. This paper shows how to service the requests while minimizing the reallocation cost. We focus on the classic problem of scheduling jobs on a multiprocessor system. Each unit-size job has a time window in which it can be executed. Jobs are dynamically added to and removed from the system. We provide an algorithm that maintains a valid schedule, as long as a sufficiently feasible schedule exists. The algorithm reschedules only O(min{log* n, log* Δ}) jobs in total for each job that is inserted into or deleted from the system, where n is the number of active jobs and Δ is the size of the largest window.
    Comment: 9 pages, 1 table; extended abstract version to appear in SPAA 2013
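
    For background, a standard static baseline for this job model is earliest-deadline-first (EDF), which is optimal for unit-length jobs with time windows on identical machines. The Python sketch below (not the paper's reallocation algorithm, which additionally bounds how many already-placed jobs move per update) builds a schedule greedily by EDF.

```python
import heapq

def edf_unit_schedule(jobs, machines):
    """Earliest-deadline-first scheduling of unit-length jobs with time
    windows on identical machines: at each time step, run the released
    jobs with the nearest deadlines.  `jobs` is a list of (release,
    deadline) pairs; a job must occupy some unit slot [t, t+1) with
    release <= t and t+1 <= deadline.  Returns a list of (job_index,
    start_time) pairs, or None if no feasible schedule exists."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, placed, t, i = [], [], 0, 0
    while i < len(order) or ready:
        if not ready and jobs[order[i]][0] > t:
            t = jobs[order[i]][0]           # jump ahead to the next release
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(ready, (jobs[j][1], j))
            i += 1
        for _ in range(machines):           # fill this slot's machines
            if not ready:
                break
            deadline, j = heapq.heappop(ready)
            if deadline < t + 1:
                return None                 # job can no longer meet its window
            placed.append((j, t))
        t += 1
    return placed

# Two machines, three jobs contending for slot [0, 1): one must wait.
print(edf_unit_schedule([(0, 2), (0, 1), (0, 1)], machines=2))
```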

    The Online House Numbering Problem: Min-Max Online List Labeling

    We introduce and study the online house numbering problem, in which houses are added arbitrarily along a road and must be assigned labels that maintain their ordering along the road. The problem is related to classic online list-labeling problems, except that the optimization goal here is to minimize the maximum number of times that any single house is relabeled. We provide several algorithms that achieve interesting trade-offs between the maximum number of relabelings per house and the number of bits used by labels.
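
    To illustrate the contrast with classic list labeling, here is a naive midpoint-labeling sketch in Python (invented for this summary, not one of the paper's algorithms): it keeps labels ordered and relabels everything when a gap runs out, but it makes no attempt to bound the maximum number of relabelings of any single house, which is precisely the house-numbering objective.

```python
def label_insert(labels, i, universe=1 << 32):
    """Toy online list labeling: insert a new element after position i
    (i = -1 inserts at the front) by giving it the midpoint label; when
    the gap is exhausted, relabel every element evenly.  The label
    universe size is an arbitrary choice for the example.  This scheme
    keeps total relabeling work low on average, but a single unlucky
    element can be relabeled many times, the quantity the house
    numbering problem seeks to minimize."""
    lo = labels[i] if i >= 0 else 0
    hi = labels[i + 1] if i + 1 < len(labels) else universe
    if hi - lo > 1:
        labels.insert(i + 1, (lo + hi) // 2)
    else:                                   # no gap left: rebuild all labels
        labels.insert(i + 1, lo)            # placeholder, fixed just below
        step = universe // (len(labels) + 1)
        labels[:] = [(k + 1) * step for k in range(len(labels))]
    return labels

labels = []
labels = label_insert(labels, -1)           # first house
labels = label_insert(labels, 0)            # new house just after it
print(labels)
```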